Mislabeled or ambiguously labeled samples in the training set can degrade the performance of deep models, so diagnosing the dataset and identifying mislabeled samples helps to improve generalization. Training dynamics, i.e., the traces left by iterations of optimization algorithms, have recently been shown to be effective for localizing mislabeled samples using hand-crafted features. In this paper, going beyond manually designed features, we introduce a novel learning-based solution: a noise detector, instantiated as an LSTM network, which learns to predict whether a sample was mislabeled from its raw training dynamics. Specifically, the proposed method trains the noise detector in a supervised manner on a dataset with synthetic label noise, and the detector can then be applied to various datasets (with either natural or synthetic label noise) without retraining. We conduct extensive experiments to evaluate the proposed method. We train the noise detector on the CIFAR datasets with synthetic label noise and test it on Tiny ImageNet, CUB-200, Caltech-256, WebVision, and Clothing1M. The results show that the proposed method precisely detects mislabeled samples on these datasets without further adaptation and outperforms state-of-the-art methods. Further experiments demonstrate that the identified mislabels can guide label correction, namely data debugging, providing improvements from the data side that are orthogonal to state-of-the-art algorithm-centric techniques.
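To make the idea concrete, here is a minimal sketch of such a detector: an LSTM reads a sample's raw training dynamics (assumed here to be a per-epoch scalar trace, e.g., the softmax probability of the labeled class) and emits a mislabel logit, trained against noise labels known from the synthesis step. All module names, feature choices, and sizes are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NoiseDetector(nn.Module):
    def __init__(self, feat_dim=1, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one logit: mislabeled or not

    def forward(self, dynamics):
        # dynamics: (batch, epochs, feat_dim) raw training-dynamics series
        _, (h_n, _) = self.lstm(dynamics)
        return self.head(h_n[-1]).squeeze(-1)

# Supervised training on synthetic noise: flip some labels on purpose,
# record each sample's dynamics, and use the known flips as targets.
detector = NoiseDetector()
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
dynamics = torch.rand(32, 100, 1)              # 32 samples, 100 epochs of traces
is_noisy = torch.randint(0, 2, (32,)).float()  # ground truth from the synthesis
loss = nn.functional.binary_cross_entropy_with_logits(detector(dynamics), is_noisy)
loss.backward()
opt.step()
```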
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adopted in a grab-and-go spirit to mitigate the sample inefficiency of visuomotor driving. Given the highly dynamic and variable nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, with improvements ranging from 2% to over 100% given very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
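The two-stage structure can be sketched roughly as follows. The stand-in networks, shapes, and the pose-supervision proxy used here in place of the full photometric warping (which requires depth, camera intrinsics, and view synthesis) are all simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

depth_net = nn.Conv2d(3, 1, 3, padding=1)   # stand-in for the depth network
pose_net = nn.Conv2d(6, 6, 3, padding=1)    # stand-in for the pose network
encoder = nn.Sequential(                    # stage-2 policy visual encoder
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6))

frame_t, frame_t1 = torch.rand(2, 1, 3, 64, 64)  # two consecutive frames

# Stage 1: predict depth and relative pose from consecutive frames; in the
# full method both are trained jointly via photometric reconstruction.
depth = depth_net(frame_t)  # would drive the photometric warp, omitted here
pose = pose_net(torch.cat([frame_t, frame_t1], dim=1)).mean(dim=(2, 3))  # (1, 6)

# Stage 2: the encoder must predict ego-motion from the CURRENT frame only,
# forcing its features to encode the driving decision itself. An MSE to the
# stage-1 pose stands in for the photometric objective.
ego_motion = encoder(frame_t)
loss = F.mse_loss(ego_motion, pose.detach())
loss.backward()
```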
Many existing autonomous driving paradigms involve a multi-stage discrete pipeline of tasks. To better predict the control signals and enhance user safety, an end-to-end approach that benefits from joint spatial-temporal feature learning is desirable. While there are some pioneering works on LiDAR-based input or implicit design, in this paper we formulate the problem in an interpretable vision-based setting. In particular, we propose a spatial-temporal feature learning scheme, named ST-P3, towards a set of more representative features for perception, prediction, and planning tasks simultaneously. Specifically, an egocentric-aligned accumulation technique is proposed to preserve geometric information in 3D space before the bird's-eye-view transformation for perception; a dual-pathway modeling is devised to take past motion variations into account for future prediction; and a temporal-based refinement unit is introduced to compensate for recognizing vision-based elements for planning. To the best of our knowledge, we are the first to systematically investigate each part of an interpretable end-to-end vision-based autonomous driving system. We benchmark our approach against previous state-of-the-art methods on both the open-loop nuScenes dataset and closed-loop CARLA simulation. The results show the effectiveness of our method. Source code, models, and protocol details are publicly available at https://github.com/openperceptionx/st-p3.
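As a rough illustration of the accumulation idea, the sketch below warps past bird's-eye-view (BEV) feature maps into the current ego frame before accumulating them; the identity sampling grid stands in for one built from the relative ego pose, and ST-P3's actual alignment operates on 3D features before the BEV transform. Names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def align_to_current(past_bev, ego_motion_grid):
    # Warp a past BEV feature map into the current ego-centric frame.
    # ego_motion_grid: (B, H, W, 2) sampling grid from the relative ego pose.
    return F.grid_sample(past_bev, ego_motion_grid, align_corners=False)

B, C, H, W = 1, 64, 200, 200
bev_frames = [torch.rand(B, C, H, W) for _ in range(3)]  # frames t-2, t-1, t
identity = F.affine_grid(torch.eye(2, 3).unsqueeze(0), (B, C, H, W),
                         align_corners=False)  # placeholder ego-motion grid

# Accumulate past BEV features after aligning them to the current frame.
accumulated = bev_frames[-1]
for past in bev_frames[:-1]:
    accumulated = accumulated + align_to_current(past, identity)
```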
Audio commands are a preferred communication medium for keeping inspectors in the loop of civil infrastructure inspection performed by a semi-autonomous drone. To understand job-specific commands from a group of heterogeneous and dynamic inspectors, a model must be developed cost-effectively for the group and easily adapted when the group changes. This paper is motivated to build a multi-task deep learning model with a Share-Split-Collaborate architecture. This architecture allows the two classification tasks to share the feature extractor and then split the subject-specific and keyword-specific features intertwined in the extracted features through feature projection and collaborative training. A base model for a group of five authorized subjects is trained and tested on the inspection keyword dataset collected by this study. The model achieves a mean accuracy of 95.3% or higher in classifying the keywords of any authorized inspector, and its mean accuracy in speaker classification is 99.2%. Thanks to the richer keyword representations the model learns from the pooled training data, adapting the base model to a new inspector requires only a small amount of training data from that inspector, such as five utterances per keyword. Using the speaker classification scores for inspector verification achieves a success rate of at least 93.9% in verifying authorized inspectors and 76.1% in detecting unauthorized ones. Furthermore, the paper demonstrates the applicability of the proposed model to larger groups on a public dataset. This paper provides a solution to challenges facing AI-assisted human-robot interaction, including worker heterogeneity, worker dynamics, and job heterogeneity.
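A minimal sketch of the shared-extractor, two-head structure is given below; the paper's additional split of subject-specific and keyword-specific features via projection and collaborative training is omitted, and all layer choices and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedAudioModel(nn.Module):
    def __init__(self, n_keywords=10, n_speakers=5):
        super().__init__()
        self.extractor = nn.Sequential(                 # shared feature extractor
            nn.Conv1d(40, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.keyword_head = nn.Linear(64, n_keywords)   # task 1: what was said
        self.speaker_head = nn.Linear(64, n_speakers)   # task 2: who said it

    def forward(self, mfcc):  # mfcc: (batch, 40, frames) acoustic features
        feat = self.extractor(mfcc)
        return self.keyword_head(feat), self.speaker_head(feat)

model = SharedAudioModel()
kw_logits, spk_logits = model(torch.rand(8, 40, 100))
loss = (nn.functional.cross_entropy(kw_logits, torch.randint(0, 10, (8,)))
        + nn.functional.cross_entropy(spk_logits, torch.randint(0, 5, (8,))))
loss.backward()
```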
Equipped with a broad array of sensors, predominant autonomous driving solutions are becoming increasingly oriented toward safe system design. Although these sensors have laid a solid foundation, most mass-production solutions to date still fall into the L2 phase. Among these, Comma.ai comes into our sight, claiming that a $999 aftermarket device mounted with a single camera and a board inside has the ability to handle L2 scenarios. Together with the open-source software of the entire system released by Comma.ai, the project is named OpenPilot. Is it possible? If so, how is it made possible? With curiosity in mind, we deep-dive into OpenPilot and conclude that the key to its success is the end-to-end system design rather than a conventional modular framework. The model, briefed as SuperCombo, can predict the ego vehicle's future trajectory and other road semantics on the fly from monocular input. Unfortunately, the training process and the massive amount of data that make all this work are not publicly available. To conduct an in-depth investigation, we attempt to reimplement the training details and test the pipeline on public benchmarks. The refactored network proposed in this work is referred to as OP-Deepdive. For a fair comparison of our version with the original SuperCombo, we introduce a dual-model deployment scheme to test the driving performance in the real world. Experimental results on nuScenes, Comma2k19, CARLA, and in-house realistic scenarios verify that a low-cost device can indeed achieve most L2 functionalities and be on par with the original SuperCombo model. In this report, we share our latest findings, shed light on a new perspective of end-to-end autonomous driving from an industrial product-level side, and hope to inspire the community to continue improving performance. Our code and benchmarks are at https://github.com/openperceptionx/openpilot-deepdive.
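As a toy illustration of the end-to-end formulation, the sketch below regresses future ego waypoints directly from a monocular image. The architecture and sizes are assumptions for exposition, not the SuperCombo or OP-Deepdive design.

```python
import torch
import torch.nn as nn

class MonoTrajectoryNet(nn.Module):
    def __init__(self, horizon=20):
        super().__init__()
        self.backbone = nn.Sequential(                    # monocular image encoder
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.traj_head = nn.Linear(64, horizon * 2)       # (x, y) per waypoint
        self.horizon = horizon

    def forward(self, image):
        feat = self.backbone(image)
        return self.traj_head(feat).view(-1, self.horizon, 2)

net = MonoTrajectoryNet()
waypoints = net(torch.rand(1, 3, 128, 256))  # -> (1, 20, 2) future positions
```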
Current end-to-end autonomous driving methods either run a controller based on a planned trajectory or perform control prediction directly, which have spanned two separately studied lines of research. Seeing their potential mutual benefits, this paper takes the initiative to explore the combination of these two well-developed worlds. Specifically, our integrated approach has two branches for trajectory planning and direct control, respectively. The trajectory branch predicts the future trajectory, while the control branch involves a novel multi-step prediction scheme so that the relationship between current actions and future states can be reasoned about. The two branches are connected, so the control branch receives corresponding guidance from the trajectory branch at each time step. The outputs from the two branches are then fused to achieve complementary advantages. Our results are evaluated in a closed-loop urban driving setting with challenging scenarios using the CARLA simulator. Even with only monocular camera input, the proposed method ranks first on the official CARLA Leaderboard, outperforming other complex candidates with multiple sensors or fusion mechanisms. The source code and data will be publicly available at https://github.com/openperceptionx/tcp.
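A schematic sketch of the two-branch layout follows: a shared encoder feeds a trajectory branch and a multi-step (GRU-based) control branch, with the control branch receiving the matching waypoint as guidance at every step. Module names, the GRU choice, and all sizes are illustrative assumptions, and the fusion of the two outputs is left to a downstream rule.

```python
import torch
import torch.nn as nn

class TwoBranchDriver(nn.Module):
    def __init__(self, steps=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.traj_branch = nn.Linear(32, steps * 2)   # future (x, y) waypoints
        self.control_cell = nn.GRUCell(32 + 2, 32)    # multi-step control branch
        self.control_head = nn.Linear(32, 2)          # (steer, throttle) per step
        self.steps = steps

    def forward(self, image):
        feat = self.encoder(image)
        waypoints = self.traj_branch(feat).view(-1, self.steps, 2)
        h, controls = feat, []
        for t in range(self.steps):
            # the control branch takes the matching waypoint as guidance
            h = self.control_cell(torch.cat([feat, waypoints[:, t]], dim=1), h)
            controls.append(self.control_head(h))
        return waypoints, torch.stack(controls, dim=1)

net = TwoBranchDriver()
wp, ctrl = net(torch.rand(1, 3, 128, 128))  # wp: (1, 4, 2), ctrl: (1, 4, 2)
```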
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSFormer to detect them. Based on these six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
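The saliency-weighted aggregation at the heart of such a metric might look like the following sketch, where six per-PEA artifact maps are pooled under a saliency weighting into a single score. The function name, shapes, and the uniform averaging over PEA types are assumptions, not the actual SSTAM formulation.

```python
import torch

def sstam_like_score(artifact_maps, saliency):
    # artifact_maps: (6, H, W) per-PEA strength maps; saliency: (H, W) >= 0
    weights = saliency / saliency.sum()                  # normalize to sum to 1
    per_pea = (artifact_maps * weights).sum(dim=(1, 2))  # saliency-weighted mean
    return per_pea.mean()  # aggregate the six PEA scores into one quality value

score = sstam_like_score(torch.rand(6, 64, 64), torch.rand(64, 64))
```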
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, automatically recognizing MEs (MER) is becoming increasingly crucial in the field of affective computing and provides essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains tiny. To address this ME data hunger, we construct a dynamic spontaneous ME dataset of the largest current scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
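One generic remedy for the class imbalance explored on DFME is inverse-frequency re-weighting of the cross-entropy loss, sketched below; this is a common baseline, not necessarily the paper's specific solution, and the seven-class setup is an assumption.

```python
import torch
import torch.nn as nn

labels = torch.randint(0, 7, (1000,))              # e.g., 7 ME categories
counts = torch.bincount(labels, minlength=7).float()
class_weights = counts.sum() / (7 * counts)        # rarer classes weigh more
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(32, 7)                        # dummy model outputs
loss = criterion(logits, labels[:32])
```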
Face Anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups, with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are the new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate the quality variance distribution, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
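The quality-invariance intuition behind point (2) can be sketched with an InfoNCE-style loss that pulls features of the same face at two simulated qualities together and pushes different faces apart. The function, temperature, and feature sizes are illustrative assumptions, not the exact CQIL objective.

```python
import torch
import torch.nn.functional as F

def quality_contrastive_loss(feat_hi, feat_lo, temperature=0.1):
    # feat_hi / feat_lo: (B, D) features of the same faces at two qualities
    hi = F.normalize(feat_hi, dim=1)
    lo = F.normalize(feat_lo, dim=1)
    logits = hi @ lo.t() / temperature   # (B, B) cross-quality similarities
    targets = torch.arange(hi.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = quality_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
```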
Interviews are regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews among themselves. However, such mock interviews with peers generally fall far short of the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct interviews online, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which learns from online interview data and provides mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector from the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
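A schematic sketch of the disentangling idea: pre-train the (large) dialog generator on plentiful ungrounded dialogs, then freeze it and fit only the (small) knowledge selector on the scarce interview dialogs, so that few parameters depend on the low-resource data. All modules and sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab, dim = 5000, 256
embed = nn.Embedding(vocab, dim)               # dialog generator: embedding and
decoder = nn.LSTM(dim, dim, batch_first=True)  # decoder, trainable on large
                                               # ungrounded dialog corpora
selector = nn.Linear(2 * dim, dim)             # small knowledge selector over
                                               # resume/job-description features

# Phase 1 (not shown): pre-train embed + decoder on ungrounded dialogs and
# resume data, both of which are plentiful.
# Phase 2: freeze the generator; only the selector sees interview dialogs.
for p in list(embed.parameters()) + list(decoder.parameters()):
    p.requires_grad = False
n_low_resource = sum(p.numel() for p in selector.parameters())
print(f"{n_low_resource} parameters depend on the scarce interview dialogs")
```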